[fix](paimon) Fix Paimon DLF catalog caching issue by adding dlf.catalog.id to cache key #55875
Merged
morningman merged 1 commit into apache:master on Sep 10, 2025
Conversation
Contributor
Thank you for your contribution to Apache Doris. Please clearly describe your PR:
Member
Author
run buildall
CalvinKirs approved these changes on Sep 10, 2025
Contributor
PR approved by at least one committer and no changes requested.
Contributor
PR approved by anyone and no changes requested.
TPC-H: Total hot run time: 34765 ms
TPC-DS: Total hot run time: 189124 ms
ClickBench: Total hot run time: 30.07 s
Contributor
FE UT Coverage Report: Increment line coverage
Contributor
FE Regression Coverage Report: Increment line coverage
morningman approved these changes on Sep 10, 2025
github-actions bot pushed a commit that referenced this pull request on Sep 10, 2025
…log.id to cache key (#55875)
What problem does this PR solve?
Problem:
Paimon's CachedClientPool uses a static cache with keys based on clientClassName, metastore.uris, and metastore type. For DLF catalogs, all these
values are identical, causing different DLF catalogs with different dlf.catalog_id configurations to incorrectly share the same HMS client pool.
This results in the last created catalog's configuration overriding previous ones.
Root Cause:
The cache key construction in CachedClientPool.extractKey() doesn't include DLF-specific configuration differences. Multiple catalogs with different
dlf.catalog_id values generate identical cache keys, leading to client pool pollution.
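For illustration, the following is a minimal, self-contained sketch of the collision. It is not Paimon's actual CachedClientPool.extractKey() code; the class and record names are hypothetical. It only shows that when the key is derived solely from the client class name, the metastore URIs, and the metastore type, two DLF catalogs that differ only in dlf.catalog_id produce equal keys and therefore share one client pool.

```java
import java.util.Objects;

public class CacheKeyCollisionSketch {

    // Hypothetical key holding only the fields used by default
    // (client class name, metastore URIs, metastore type).
    record PoolKey(String clientClassName, String metastoreUris, String metastoreType) {}

    static PoolKey extractKeyWithoutDlfId(String clientClassName,
                                          String metastoreUris,
                                          String metastoreType) {
        // dlf.catalog_id is not part of the key, so it cannot distinguish catalogs.
        return new PoolKey(clientClassName, metastoreUris, metastoreType);
    }

    public static void main(String[] args) {
        // Two DLF catalogs that differ only in their dlf.catalog_id configuration.
        PoolKey catalogA = extractKeyWithoutDlfId("DlfMetaStoreClient", "", "dlf");
        PoolKey catalogB = extractKeyWithoutDlfId("DlfMetaStoreClient", "", "dlf");

        // The keys are equal, so both catalogs map to the same HMS client pool;
        // the pool created first is reused and its configuration wins.
        System.out.println(Objects.equals(catalogA, catalogB)); // true
    }
}
```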
Solution:
Add dlf.catalog_id to the cache key by configuring client-pool-cache.keys = "conf:dlf.catalog.id" in
PaimonAliyunDLFMetaStoreProperties.appendCustomCatalogOptions(). This ensures each DLF catalog with a unique catalog_id gets its own HMS client
pool.
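As a rough sketch of what that configuration amounts to: the option name client-pool-cache.keys and the value "conf:dlf.catalog.id" are taken from the description above, while the method shape and the options map are assumptions; the exact signature of appendCustomCatalogOptions() in Doris may differ.

```java
import java.util.HashMap;
import java.util.Map;

public class DlfClientPoolCacheKeySketch {

    // Paimon option that lists extra entries to include in the HMS client
    // pool cache key; "conf:<key>" pulls the value from the Hive conf.
    private static final String CLIENT_POOL_CACHE_KEYS = "client-pool-cache.keys";

    static void appendCustomCatalogOptions(Map<String, String> catalogOptions) {
        // Include the Hive conf key dlf.catalog.id in the cache key so that
        // DLF catalogs with different catalog ids get separate client pools.
        catalogOptions.put(CLIENT_POOL_CACHE_KEYS, "conf:dlf.catalog.id");
    }

    public static void main(String[] args) {
        Map<String, String> options = new HashMap<>();
        appendCustomCatalogOptions(options);
        System.out.println(options); // {client-pool-cache.keys=conf:dlf.catalog.id}
    }
}
```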
Release note
None
Check List (For Author)
Test
Behavior changed:
Does this need documentation?
Check List (For Reviewer who merges this PR)